Tech Companies Point to Self-Regulatory Strategies Before Senate AI Hearings 


U.S. lawmakers are hitting the ground running on creating an artificial intelligence (AI) policy. 

Big Tech companies are rushing to prove that they can get there first — and do the most — with effective self-regulation. 

After all, failure to ensure that AI products are safe and compliant will undermine both AI’s progress and its promise. 

Consumers and business decision-makers are hearing all about the perils and pitfalls of AI on the news, and the marketplace success of today’s AI platforms may very well hinge on effective self-governance in the absence of federal guardrails.

If or when federal regulation does roll out, the companies best prepared for it will have the least exposure to liability, giving them the greatest advantage. 

That’s why businesses can’t afford to wait. 

Already, Microsoft has pledged to assume legal responsibility for any copyright infringement that springs from the use of its AI-powered Word, PowerPoint, and enterprise coding tools, adding in a statement that the policy applies “as long as the customer used the guardrails and content filters we have built into our products.”

For its part, Google has announced a new policy requiring advertisers for the upcoming U.S. election to disclose when the ads they wish to display across any of Google’s platforms (excluding YouTube) have been manipulated or created using AI. 

And AI pioneer OpenAI, while facing an investigation by the Federal Trade Commission (FTC) into its practices at home and a complaint about its compliance with the European Union’s General Data Protection Regulation (GDPR) abroad, recently unveiled a new enterprise-focused solution meant to underscore its commitment to privacy and security. 


Lawmakers Look to Committee Tripleheader for Policy Home Run

The public sector is also starting to move faster on AI oversight: U.S. lawmakers have scheduled a series of AI hearings this week with leading industry representatives, meant to build a workable knowledge foundation to guide AI policy development. 

First up, on Tuesday (Sept. 12), the Senate Judiciary Subcommittee on Privacy, Technology, and the Law is holding a hearing titled “Oversight of AI: Legislating on Artificial Intelligence.” The witnesses include Nvidia Chief Scientist and Senior Vice President of Research William Dally and Microsoft Vice Chair and President Brad Smith.

“Top industry executives and leading experts will help us shape legislation to protect against AI harms,” said Senator Richard Blumenthal, the panel’s chairman.

On Wednesday (Sept. 13), U.S. Senate Majority Leader Chuck Schumer is hosting a closed-door session for all 100 Senators to kick off the Senate’s “AI Insight Forum.” 

“It will be a meeting unlike any other that we have seen in the Senate in a very long time, perhaps ever: a coming together of top voices in business, civil rights, defense, research, labor, the arts, all together, in one room, having a much-needed conversation about how Congress can tackle AI,” Schumer said.

Those top voices are slated to include OpenAI CEO Sam Altman, former Microsoft CEO Bill Gates, Alphabet CEO Sundar Pichai, Nvidia CEO Jensen Huang, Palantir CEO Alex Karp, X owner Elon Musk, and Meta CEO Mark Zuckerberg, among over two dozen other invitees. 

Some observers have criticized the list for being heavily slanted toward business executives rather than speakers pulled from academia and other fields. 

On Thursday (Sept. 14), the U.S. government is looking internally, with a House Oversight subcommittee holding a hearing called “How Are Federal Agencies Harnessing Artificial Intelligence?” to examine potential risks in federal agency adoption of AI. Witnesses will be drawn from the White House Office of Science and Technology Policy, the Department of Defense and the Department of Homeland Security.

The U.S. hearings come as lawmakers return from their summer recess with new pep in their step, and they follow last week’s pledge by G-20 leaders to ensure “responsible AI development, deployment and use” that would safeguard rights, transparency, privacy and data protection. 


Big Tech’s Self-Regulation Opening Salvo 

To mitigate the risks inherent in AI, many of the tech companies behind today’s leading large language models (LLMs) have agreed to “behave responsibly and ensure their products are safe,” as well as to work to move the whole AI ecosystem forward in a way that reinforces the safety, security and trustworthiness of the technology.

Some self-regulatory approaches for LLMs include reinforcement learning, ongoing monitoring, human-in-the-loop oversight, data security access controls, and data minimization. 
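As a rough illustration of how the last two of those approaches can fit together, the minimal Python sketch below screens model output for sensitive-looking identifiers and routes anything it flags to a human review queue instead of returning it to the user. The patterns, thresholds and function names are hypothetical placeholders for this article, not any vendor’s actual guardrails or content filters.

```python
import re
from dataclasses import dataclass

# Illustrative only: these patterns are stand-ins for real data-minimization rules.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # resembles a U.S. Social Security number
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),  # resembles a payment card number
]

@dataclass
class ModerationResult:
    allowed: bool
    reason: str

def filter_output(text: str) -> ModerationResult:
    """Data-minimization pass: block responses that appear to leak sensitive identifiers."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return ModerationResult(False, f"matched sensitive pattern {pattern.pattern!r}")
    return ModerationResult(True, "clean")

def human_in_the_loop(text: str, review_queue: list) -> str:
    """Route flagged output to a human reviewer rather than returning it to the user."""
    result = filter_output(text)
    if result.allowed:
        return text
    review_queue.append({"text": text, "reason": result.reason})
    return "[withheld pending human review]"

if __name__ == "__main__":
    queue = []
    print(human_in_the_loop("The forecast calls for rain tomorrow.", queue))
    print(human_in_the_loop("Customer SSN is 123-45-6789.", queue))
    print(f"{len(queue)} item(s) awaiting human review")
```

In practice, production guardrails would layer far more sophisticated classifiers, logging and audit trails on top of simple pattern checks like these, but the basic shape, filter first, escalate to a person when uncertain, is the same.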

It is promising that Big Tech and governments are signaling that they are willing to engage meaningfully over proactive AI regulation. Still, a lot of hard work is needed. 

“AI regulation is going to be an ongoing continuous process of interaction between government and the private sector to make sure that the public gets all of the benefits that can come from this technological innovation but also is protected from the harms,” Professor Cary Coglianese, founding director of the Penn Program on Regulation, told PYMNTS last month (Aug. 4).

“If there was an equivalent of a seat belt that we could require be installed with every AI tool, great. But there isn’t a one-size-fits-all action that can be applied. … It’s going to be an all-hands-on-deck kind of approach that we need to take,” he added. 

For all PYMNTS AI coverage, subscribe to the daily Artificial Intelligence Newsletter.